Results 1 - 20 of 46
1.
Sensors (Basel) ; 24(5)2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38474939

ABSTRACT

The integration of sensor technology in healthcare has become crucial for disease diagnosis and treatment [...].


Subject(s)
Biomedical Technology , Delivery of Health Care , Humans , Artificial Intelligence
2.
J Big Data ; 10(1): 78, 2023.
Article in English | MEDLINE | ID: mdl-37250233

ABSTRACT

This research analyzes the area required for conflict resolution between aircraft in two flows impacted by a convective weather cell (CWC). The CWC is introduced as a constrained area, forbidden to fly through, which affects the air traffic flows. Prior to the conflict resolution, the two flows and their intersection are relocated away from the CWC area (thus enabling circumvention of the CWC), after which the intersection angle of the relocated flows is tuned to minimize the size of the conflict zone (CZ, a circular area centered at the intersection of the two flows that provides aircraft enough space to completely resolve the conflict within it). The essence of the proposed solution is therefore to provide conflict-free trajectories for aircraft in intersecting flows affected by the CWC while minimizing the CZ size, so that the finite airspace occupied for conflict resolution and CWC circumvention can be reduced. Compared with the best existing solutions and current industry practice, this article focuses on reducing the airspace required for aircraft-to-aircraft and aircraft-to-weather conflict resolution, rather than on minimizing distance travelled, time, or fuel consumption. The analysis, conducted in Microsoft Excel 2010, confirmed the relevance of the proposed model and demonstrated variations in the efficiency of the utilized airspace. The model's transdisciplinary nature makes it potentially applicable in other fields of study, such as conflict resolution between unmanned aerial vehicles (UAVs) and fixed objects like buildings. Building on this model and taking into consideration large and complex data sets, such as weather-related data and flight data (aircraft position, speed, and altitude), we believe it is possible to conduct more sophisticated analyses that would take advantage of Big Data.

3.
Madima 23 (2023) ; 2023: 1-9, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38288389

ABSTRACT

An unhealthy diet is a top risk factor for obesity and numerous chronic diseases. To help the public adopt a healthy diet, nutrition scientists need user-friendly tools to conduct Dietary Assessment (DA). In recent years, new DA tools have been developed using a smartphone or a wearable device which acquires images during a meal. These images are then processed to estimate the calories and nutrients of the consumed food. Although considerable progress has been made, 2D food images lack a scale reference and 3D volumetric information. In addition, the food must be sufficiently observable from the image. This basic condition can be met when the food is stand-alone (no food container is used) or is contained in a shallow plate. However, the condition cannot be met easily when a bowl is used: the food is often occluded by the bowl edge, and the shape of the bowl may not be fully determined from the image. Yet bowls are the food containers most utilized by billions of people in many parts of the world, especially in Asia and Africa. In this work, we propose to premeasure plates and bowls using a marked adhesive strip before a dietary study starts. This simple procedure eliminates the use of a scale reference throughout the DA study. In addition, we use mathematical models and image processing to reconstruct the bowl in 3D. Our key idea is to estimate how full the bowl is rather than how much food (in either volume or weight) is in the bowl. This idea reduces the effect of occlusion. The experimental data have shown satisfactory results of our methods, which enable accurate DA studies using both plates and bowls with reduced burden on research participants.
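
To illustrate how a premeasured bowl can turn a fullness estimate into a food volume, the following minimal sketch interpolates a hypothetical depth-to-capacity table; the table values, fill level, and function name are illustrative assumptions, not data or code from the paper.

```python
import numpy as np

def food_volume_from_fullness(fill_fraction, depth_cm, capacity_ml):
    """Interpolate a premeasured (depth -> cumulative capacity) table of a bowl
    to convert an observed fill level into an occupied volume."""
    fill_depth = fill_fraction * depth_cm[-1]          # observed fill level in cm
    return float(np.interp(fill_depth, depth_cm, capacity_ml))

# Hypothetical premeasurement of one bowl (depth in cm vs. cumulative capacity in ml).
depth_cm = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
capacity_ml = np.array([0.0, 40.0, 110.0, 210.0, 340.0, 500.0])

# A bowl judged to be about 60% full would hold roughly this much food.
print(food_volume_from_fullness(0.6, depth_cm, capacity_ml))   # 210.0 ml
```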

4.
Sensors (Basel) ; 22(20)2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36298356

ABSTRACT

An unhealthy diet is strongly linked to obesity and numerous chronic diseases. Currently, over two-thirds of American adults are overweight or obese. Although dietary assessment helps people improve nutrition and lifestyle, traditional methods for dietary assessment depend on self-report, which is inaccurate and often biased. In recent years, as electronics, information, and artificial intelligence (AI) technologies have advanced rapidly, image-based objective dietary assessment using wearable electronic devices has become a powerful approach. However, research in this field has focused on the development of advanced algorithms to process image data. Few reports exist on the study of device hardware for the particular purpose of dietary assessment. In this work, we demonstrate that, with current hardware designs, there is a considerable risk of missing important dietary data owing to the common use of a rectangular image frame and a fixed camera orientation. We then present two designs of a new camera system that reduce data loss by generating circular images using rectangular image sensor chips. We also present a mechanical design that allows the camera orientation to be adjusted, adapting to differences among device wearers, such as gender, body height, and so on. Finally, we discuss the pros and cons of rectangular versus circular images with respect to information preservation and data processing using AI algorithms.


Subject(s)
Nutrition Assessment , Wearable Electronic Devices , Adult , Humans , Artificial Intelligence , Diet , Algorithms
5.
Article in English | MEDLINE | ID: mdl-35544492

ABSTRACT

Although deep learning has achieved great success in many computer vision tasks, its performance relies on the availability of large datasets with densely annotated samples. Such datasets are difficult and expensive to obtain. In this article, we focus on the problem of learning representations from unlabeled data for semantic segmentation. Inspired by two patch-based methods, we develop a novel self-supervised learning framework by formulating the jigsaw puzzle problem as a patch-wise classification problem and solving it with a fully convolutional network. By learning to solve a jigsaw puzzle comprising 25 patches and transferring the learned features to the semantic segmentation task, we achieve a 5.8-percentage-point improvement on the Cityscapes dataset over a baseline model initialized from random values. Note that we use only about one sixth of the Cityscapes training images in our experiment, which is designed to imitate real cases where fully annotated images are usually limited in number. We also show that our self-supervised learning method can be applied to different datasets and models. In particular, we achieved performance competitive with state-of-the-art methods on the PASCAL VOC2012 dataset at a significantly lower pretraining time cost.
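
As a rough illustration of the patch-wise formulation described above, the sketch below builds a 25-patch jigsaw sample whose per-patch labels are the original grid positions. It is a minimal numpy example under assumed image sizes, not the authors' training pipeline.

```python
import numpy as np

def make_jigsaw_sample(image, grid=5, rng=None):
    """Split an image into grid*grid patches, shuffle them, and return
    (shuffled_patches, labels), where labels[i] is the original index of the
    patch now at position i -- a patch-wise classification target.
    `image` is an HxWxC array whose sides are divisible by `grid`."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[0] // grid, image.shape[1] // grid
    patches = [image[r * h:(r + 1) * h, c * w:(c + 1) * w]
               for r in range(grid) for c in range(grid)]
    order = rng.permutation(grid * grid)           # random jigsaw permutation
    shuffled = np.stack([patches[i] for i in order])
    labels = order                                 # per-patch class in [0, 24]
    return shuffled, labels

# Example: a 25-patch puzzle from a 250x250 RGB image (placeholder data).
img = np.zeros((250, 250, 3), dtype=np.uint8)
x, y = make_jigsaw_sample(img)
print(x.shape, y.shape)   # (25, 50, 50, 3) (25,)
```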

6.
Sensors (Basel) ; 22(4)2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35214399

ABSTRACT

Knowing the amounts of energy and nutrients in an individual's diet is important for maintaining health and preventing chronic diseases. As electronic and AI technologies advance rapidly, dietary assessment can now be performed using food images obtained from a smartphone or a wearable device. One of the challenges in this approach is to computationally measure the volume of food in a bowl from an image. This problem has not been studied systematically, despite the bowl being the most utilized food container in many parts of the world, especially in Asia and Africa. In this paper, we present a new method to measure the size and shape of a bowl by adhering a paper ruler centrally across the bottom and sides of the bowl and then taking an image. When observed in the image, the distortions in the width of the paper ruler and the spacings between ruler markers completely encode the size and shape of the bowl. A computational algorithm is developed to reconstruct the three-dimensional bowl interior using the observed distortions. Our experiments using nine bowls, colored liquids, and amorphous foods demonstrate the high accuracy of our method for food volume estimation involving round bowls as containers. A total of 228 images of amorphous foods were also used in a comparative experiment between our algorithm and an independent human estimator. The results showed that our algorithm outperformed the human estimator, who utilized different types of reference information and two estimation methods, including direct volume estimation and indirect estimation through bowl fullness.
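
Once the bowl interior is reconstructed, the volume of a round bowl (or of its contents up to a fill level) follows from the radial profile as a solid of revolution. The short sketch below integrates an assumed, made-up profile and is only a geometric illustration, not the paper's reconstruction algorithm.

```python
import numpy as np

def bowl_volume_cm3(z_cm, r_cm):
    """Volume of a rotationally symmetric bowl interior from its radial
    profile r(z): V = pi * integral of r(z)^2 dz (trapezoidal rule)."""
    z = np.asarray(z_cm, dtype=float)
    r = np.asarray(r_cm, dtype=float)
    dz = np.diff(z)
    return float(np.pi * np.sum(0.5 * (r[:-1] ** 2 + r[1:] ** 2) * dz))

# Illustrative profile: a spherical-cap-like bowl, 6 cm deep (not real data).
z = np.linspace(0.0, 6.0, 61)                                # depth from the bottom, cm
r = np.sqrt(np.clip(7.0 ** 2 - (6.0 - z) ** 2, 0.0, None))   # radius at each depth, cm
print(f"interior volume = {bowl_volume_cm3(z, r):.0f} cm^3")
```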


Subject(s)
Diet , Energy Intake , Algorithms , Food , Humans , Smartphone
7.
Biomed Tech (Berl) ; 67(1): 53-60, 2022 Feb 23.
Article in English | MEDLINE | ID: mdl-35073618

ABSTRACT

The macro and micro design is essential to the biomechanical performance of a short implant. In this study, the thread parameters of short implants used in edentulous maxillae are discussed. The aim of the study is to analyse the biomechanical distinctions among different thread parameters of short implants by applying a vertical or oblique load of 130 N to the dental prosthesis. A 6 x 5 mm implant was used in the posterior maxillary arch, where the molar region is located. The CAD model was assembled from three parts: a crown, an implant system, and a jaw. By applying the vertical or oblique load to the crown, the von Mises stresses in cortical bone and trabecular bone were evaluated in pairs along the lines v1-v2 and a1-a2. The results showed that the reverse buttress thread induced more stress in cancellous bone, whereas the buttress thread did the opposite. According to the FEA results, the trapezoidal thread (V-thread) is more favourable than the reverse buttress thread. The rectangular thread induced more uneven stresses in cancellous bone.


Subject(s)
Dental Implants , Software , Biomechanical Phenomena , Computer Simulation , Dental Prosthesis Design , Dental Stress Analysis , Finite Element Analysis , Stress, Mechanical
8.
Electronics (Basel) ; 10(13)2021 Jul.
Article in English | MEDLINE | ID: mdl-34552763

ABSTRACT

It is well known that many chronic diseases are associated with an unhealthy diet. Although improving diet is critical, adopting a healthy diet is difficult despite its benefits being well understood. Technology is needed to assess dietary intake accurately and easily in real-world settings so that effective interventions to manage overweight, obesity, and related chronic diseases can be developed. In recent years, new wearable imaging and computational technologies have emerged. These technologies are capable of performing objective and passive dietary assessment with a much simpler procedure than traditional questionnaires. However, a critical task is to estimate the portion size (in this case, the food volume) from a digital image. Currently, this task is very challenging because the volumetric information in two-dimensional images is incomplete, and the estimation involves a great deal of imagination, beyond the capacity of traditional image processing algorithms. In this work, we present a novel Artificial Intelligence (AI) system that mimics the thinking of dietitians, who use a set of common objects as gauges (e.g., a teaspoon, a golf ball, a cup, and so on) to estimate portion size. Specifically, our human-mimetic system "mentally" gauges the volume of food using a set of internal reference volumes that have been learned previously. At the output, our system produces a vector of probabilities of the food with respect to the internal reference volumes. The estimation is then completed by an "intelligent guess", implemented as an inner product between the probability vector and the reference volume vector. Our experiments using both virtual and real food datasets have shown accurate volume estimation results.
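
The "intelligent guess" described above reduces to a single inner product. The tiny sketch below shows that step with made-up reference volumes and classifier probabilities; the gauge values and numbers are illustrative assumptions, not the system's learned references.

```python
import numpy as np

# Hypothetical reference volumes the system has "learned" (illustrative
# placeholders, not the paper's actual gauge set), in millilitres.
reference_volumes_ml = np.array([5.0, 40.0, 240.0, 500.0])

# Hypothetical classifier output: probability of the food matching each gauge.
probabilities = np.array([0.05, 0.15, 0.70, 0.10])

# "Intelligent guess": inner product of the probability vector with the
# reference-volume vector, as described in the abstract.
estimated_volume_ml = float(probabilities @ reference_volumes_ml)
print(f"Estimated portion volume: {estimated_volume_ml:.1f} ml")
```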

9.
Front Artif Intell ; 4: 644712, 2021.
Article in English | MEDLINE | ID: mdl-33870184

ABSTRACT

Malnutrition, including both undernutrition and obesity, is a significant problem in low- and middle-income countries (LMICs). In order to study malnutrition and develop effective intervention strategies, it is crucial to evaluate nutritional status in LMICs at the individual, household, and community levels. In a multinational research project supported by the Bill & Melinda Gates Foundation, we have been using a wearable technology to conduct objective dietary assessment in sub-Saharan Africa. Our assessment includes multiple diet-related activities in urban and rural families, including food sources (e.g., shopping, harvesting, and gathering), preservation/storage, preparation, cooking, and consumption (e.g., portion size and nutrition analysis). Our wearable device ("eButton", worn on the chest) acquires real-life images automatically during waking hours at preset time intervals. The recorded images, numbering in the tens of thousands per day, are post-processed to obtain the information of interest. Although we expect future Artificial Intelligence (AI) technology to extract this information automatically, at present we utilize AI to separate the acquired images into two binary classes: images with (Class 1) and without (Class 0) edible items. As a result, researchers need to study only Class-1 images, reducing their workload significantly. In this paper, we present a composite machine learning method to perform this classification, meeting the specific challenges of high complexity and diversity in real-world LMIC data. Our method consists of a deep neural network (DNN) and a shallow learning network (SLN) connected by a novel probabilistic network interface layer. After presenting the details of our method, an image dataset acquired from Ghana is utilized to train and evaluate the machine learning system. Our comparative experiment indicates that the new composite method performs better than the conventional deep learning method, assessed by integrated measures of sensitivity, specificity, and burden index, as indicated by the Receiver Operating Characteristic (ROC) curve.

10.
IEEE Trans Cybern ; 51(7): 3510-3523, 2021 Jul.
Article in English | MEDLINE | ID: mdl-31056530

ABSTRACT

Multilabel feature extraction (FE) is an effective preprocessing step to cope with possibly irrelevant, redundant, and noisy features, reducing computational costs and even improving classification performance. The original normalized cross-covariance operator represents a kernel-based nonlinear dependence measure between features and labels, whose empirical estimator is formulated as a trace operation involving two inverse matrices of the feature and label kernels with a regularization constant. Due to such a complicated expression, it is impossible to derive an eigenvalue problem for linear FE directly. In this paper, we approximate this measure using the Moore-Penrose inverse, a linear kernel for the feature space, and a delta kernel for the label space, and then symmetrize the entire matrix in the trace operation, resulting in an effective approximated and symmetrized representation. Under orthonormal projection direction constraints, maximizing this modified form induces a novel eigenvalue problem for multilabel linear FE. Experiments on 12 data sets illustrate that our proposed method performs best compared with seven existing FE techniques, according to eight multilabel classification performance metrics and three statistical tests.
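
The general recipe the abstract describes (approximate a feature-label dependence measure, symmetrize it, and maximize its trace over orthonormal projections via an eigenvalue problem) can be sketched as follows. The specific dependence matrix here is an illustrative stand-in built from a linear feature kernel, one-hot labels, and a Moore-Penrose inverse; it is not the paper's derived form.

```python
import numpy as np

def linear_fe_by_trace_maximization(X, Y, k):
    """Illustrative linear feature extraction: find an orthonormal projection
    W maximizing trace(W^T S W), solved as an eigenvalue problem. S is a
    simple symmetrized feature-label dependence matrix standing in for the
    paper's approximated measure."""
    Xc = X - X.mean(axis=0)                                  # center features
    A = Xc.T @ Y @ np.linalg.pinv(Y.T @ Y) @ Y.T @ Xc        # Moore-Penrose inverse on the label block
    S = 0.5 * (A + A.T)                                      # symmetrize (A is already symmetric for this choice)
    eigvals, eigvecs = np.linalg.eigh(S)                     # orthonormal eigenvectors
    W = eigvecs[:, np.argsort(eigvals)[::-1][:k]]            # top-k projection directions
    return W

X = np.random.randn(100, 20)                       # 100 samples, 20 features
Y = (np.random.rand(100, 5) > 0.7).astype(float)   # 5 binary labels
W = linear_fe_by_trace_maximization(X, Y, k=3)
Z = X @ W                                          # extracted 3-D features
```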

11.
Public Health Nutr ; 24(6): 1248-1255, 2021 04.
Article in English | MEDLINE | ID: mdl-32854804

ABSTRACT

OBJECTIVE: Accurate measurements of food volume and density are often required as 'gold standards' for the calibration of image-based dietary assessment and for food database development. Currently, there is no specialised laboratory instrument for these measurements. We present the design of a new volume and density (VD) meter to bridge this technological gap. DESIGN: Our design consists of a turntable, a load sensor, a set of cameras and lights installed on an arc-shaped stationary support, and a microcomputer. It acquires an array of food images, reconstructs a 3D volumetric model, weighs the food and calculates both food volume and density, all in an automatic process controlled by the microcomputer. To adapt to the complex shapes of foods, a new food surface model, derived from the electric field of charged particles, is developed for 3D point cloud reconstruction of either convex or concave food surfaces. RESULTS: We conducted two experiments to evaluate the VD meter. The first experiment utilised computer-synthesised 3D objects with prescribed convex and concave surfaces of known volumes to investigate different food surface types. The second experiment was based on actual foods with different shapes, colours and textures. Our results indicated that, for synthesised objects, the measurement error of the electric field-based method was <1%, significantly lower than that of traditional methods. For real-world foods, the measurement error depended on the type of food volume (detailed discussion included). The largest error was approximately 5%. CONCLUSION: The VD meter provides a new electronic instrument to support advanced research in nutrition science.


Subject(s)
Electronics , Food , Calibration , Humans
12.
Sci Rep ; 10(1): 21014, 2020 12 03.
Article in English | MEDLINE | ID: mdl-33273503

ABSTRACT

This paper reports on the use of machine learning to analyze data harnessed by fiber-optic distributed acoustic sensors (DAS) using fiber with enhanced Rayleigh backscattering to recognize vibration events induced by human locomotion. The DAS used in this work is based on homodyne phase-sensitive optical time-domain reflectometry (φ-OTDR). The signal-to-noise ratio (SNR) of the DAS was enhanced using femtosecond laser-induced artificial Rayleigh scattering centers in single-mode fiber cores. Both supervised and unsupervised machine-learning algorithms were explored to identify people and specific events that produce acoustic signals. Using convolutional deep neural networks, the supervised machine learning scheme achieved over 76.25% accuracy in recognizing human identities. Meanwhile, the unsupervised machine learning scheme achieved over 77.65% accuracy in recognizing events and human identities through acoustic signals. Through integrated efforts on both sensor device innovation and machine learning data analytics, this paper shows that the DAS technique can be an effective security technology to detect and identify highly similar acoustic events with high spatial resolution and high accuracy.


Subject(s)
Biometric Identification/methods , Fiber Optic Technology/methods , Locomotion , Machine Learning , Acoustics/instrumentation , Biometric Identification/instrumentation , Fiber Optic Technology/instrumentation , Humans
13.
Opt Express ; 28(19): 27277-27292, 2020 Sep 14.
Article in English | MEDLINE | ID: mdl-32988024

ABSTRACT

This paper presents an integrated technical framework to protect pipelines against both malicious intrusions and piping degradation using distributed fiber sensing technology and artificial intelligence. A distributed acoustic sensing (DAS) system based on phase-sensitive optical time-domain reflectometry (φ-OTDR) was used to detect acoustic wave propagation and scattering along pipeline structures consisting of straight piping and a sharp bend elbow. The signal-to-noise ratio of the DAS system was enhanced by femtosecond laser-induced artificial Rayleigh scattering centers. Data harnessed by the DAS system were analyzed by neural network-based machine learning algorithms. The system identified various external impact events with over 85% accuracy, and achieved over 94% accuracy for defect identification through supervised learning and 71% accuracy through unsupervised learning.

14.
Article in English | MEDLINE | ID: mdl-32191886

ABSTRACT

Semantic segmentation is a key step in scene understanding for autonomous driving. Although deep learning has significantly improved segmentation accuracy, current high-quality models such as PSPNet and DeepLabV3 are inefficient given their complex architectures and reliance on multi-scale inputs. Thus, it is difficult to apply them to real-time or practical applications. On the other hand, existing real-time methods cannot yet produce satisfactory results on small objects such as traffic lights, which are imperative to safe autonomous driving. In this paper, we improve the performance of real-time semantic segmentation from two perspectives, methodology and data. Specifically, we propose a real-time segmentation model coined Narrow Deep Network (NDNet) and build a synthetic dataset by inserting additional small objects into the training images. The proposed method achieves 65.7% mean intersection over union (mIoU) on the Cityscapes test set with only 8.4G floating-point operations (FLOPs) on 1024×2048 inputs. Furthermore, by re-training the existing PSPNet and DeepLabV3 models on our synthetic dataset, we obtained an average 2% mIoU improvement on small objects.
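
One simple way to build such a synthetic dataset is copy-paste augmentation: crop a small object (e.g., a traffic light) together with its mask and insert it into a training image while updating the segmentation labels. The sketch below is a generic numpy version of that idea under assumed array conventions and an assumed class id; it is not NDNet's actual data pipeline.

```python
import numpy as np

def paste_small_object(image, label_map, obj_rgb, obj_mask, obj_class, y, x):
    """Insert a small object crop into `image` at (y, x) and write its class id
    into `label_map` under the object mask. `image` is HxWx3 uint8, `label_map`
    is HxW int, `obj_rgb` is hxwx3, and `obj_mask` is a boolean hxw array."""
    h, w = obj_mask.shape
    img_roi = image[y:y + h, x:x + w]
    lbl_roi = label_map[y:y + h, x:x + w]
    img_roi[obj_mask] = obj_rgb[obj_mask]     # overwrite pixels under the mask
    lbl_roi[obj_mask] = obj_class             # and their semantic labels
    return image, label_map

# Illustrative usage with placeholder data (class id 19 for "traffic light" is assumed).
img = np.zeros((512, 1024, 3), dtype=np.uint8)
lbl = np.zeros((512, 1024), dtype=np.int64)
obj = np.full((20, 8, 3), 255, dtype=np.uint8)
mask = np.ones((20, 8), dtype=bool)
img, lbl = paste_small_object(img, lbl, obj, mask, obj_class=19, y=100, x=300)
```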

15.
Front Nutr ; 7: 519444, 2020.
Article in English | MEDLINE | ID: mdl-33521029

ABSTRACT

Despite the extreme importance of food intake to human health, it is currently difficult to conduct an objective dietary assessment without individuals' self-report. In recent years, a passive method utilizing a wearable electronic device has emerged. This device acquires food images automatically during the eating process. These images are then analyzed to estimate intakes of calories and nutrients, assisted by advanced computational algorithms. Although this passive method is highly desirable, it has been thwarted by the requirement of a fiducial marker, which must be present in the image as a scale reference. The importance of this scale reference is analogous to that of the scale bar on a map, which determines distances or areas in any geographic region covered by the map. Likewise, the sizes or volumes of arbitrary foods on a dining table covered by an image cannot be determined without a scale reference. Currently, the fiducial marker (often a checkerboard card) serves as the scale reference and must be present on the table before taking pictures, requiring human effort to carry, place, and retrieve the marker manually. In this work, we demonstrate that the fiducial marker can be eliminated if an individual's dining location is fixed and a one-time calibration using a circular plate of known size is performed. When the individual uses another circular plate of unknown size, our algorithm estimates its radius using the range of pre-calibrated distances between the camera and the plate, from which the desired scale reference is determined automatically. Our comparative experiment indicates that the mean absolute percentage error of the proposed estimation method is ~10.73%. Although this error is larger than the 6.68% error of the manual method using a fiducial marker on the table, the new method has the distinctive advantage of eliminating the manual procedure and automatically generating the scale reference.
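
Under a pinhole camera model with a roughly fixed camera-to-plate distance, the one-time calibration amounts to a proportionality between physical and pixel radii. The sketch below is that simplified geometric reading with invented numbers; the paper's actual algorithm works over a range of pre-calibrated distances rather than a single fixed one.

```python
def estimate_plate_radius_cm(r_pixels_new, r_pixels_cal, radius_cal_cm):
    """Hypothetical simplification of the idea in the abstract: if the dining
    location (and hence camera-to-plate distance) is roughly fixed, the pinhole
    model gives image radius proportional to physical radius, so a one-time
    calibration with a plate of known radius converts pixels to centimetres."""
    return r_pixels_new * radius_cal_cm / r_pixels_cal

# One-time calibration: a plate of 12 cm radius appears 480 px in radius.
# A later, unknown plate at the same seat appears 420 px in radius.
print(estimate_plate_radius_cm(420, 480, 12.0))   # 10.5 cm (illustrative numbers)
```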

16.
Public Health Nutr ; 22(7): 1168-1179, 2019 05.
Article in English | MEDLINE | ID: mdl-29576027

ABSTRACT

OBJECTIVE: To develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment. DESIGN: To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable device, called eButton, from free-living individuals. Three thousand nine hundred images containing real-world activities, which formed eButton data set 1, were manually selected from thirty subjects. eButton data set 2 contained 29,515 images acquired from a research participant in a week-long unrestricted recording. They included both food- and non-food-related real-life activities, such as dining at both home and restaurants, cooking, shopping, gardening, housekeeping chores, taking classes, gym exercise, etc. All images in these data sets were classified as food/non-food images based on their tags generated by a convolutional neural network. RESULTS: A cross data-set test was conducted on eButton data set 1. The overall accuracy of food detection was 91.5% and 86.4%, respectively, when one-half of data set 1 was used for training and the other half for testing. For eButton data set 2, 74.0% sensitivity and 87.0% specificity were obtained if both 'food' and 'drink' were considered as food images. Alternatively, if only 'food' items were considered, the sensitivity and specificity reached 85.0% and 85.8%, respectively. CONCLUSIONS: The AI technology can automatically detect foods from low-quality, wearable camera-acquired real-world egocentric images with reasonable accuracy, reducing both the burden of data processing and privacy concerns.
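
For reference, the sensitivity and specificity figures quoted above follow the standard confusion-matrix definitions. The sketch below computes them from made-up counts; the true/false positive numbers are illustrative, not the study's data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (recall on food images) and specificity (recall on
    non-food images), the two metrics reported in the abstract."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only (not the study's confusion matrix):
sens, spec = sensitivity_specificity(tp=850, fn=150, tn=858, fp=142)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```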


Subject(s)
Artificial Intelligence , Diet Records , Dietetics/instrumentation , Image Processing, Computer-Assisted , Photography/instrumentation , Activities of Daily Living , Algorithms , Humans
17.
Int J Intell Robot Appl ; 3(3): 298-313, 2019 Sep.
Article in English | MEDLINE | ID: mdl-33283042

ABSTRACT

Functional electrical stimulation (FES) has recently been proposed as a supplementary torque assist in lower-limb powered exoskeletons for persons with paraplegia. In the combined system, also known as a hybrid neuroprosthesis, both FES-assist and the exoskeleton act to generate lower-limb torques to achieve standing and walking functions. Because of this actuator redundancy, we are motivated to optimally allocate FES-assist and exoskeleton torque based on a performance index that penalizes FES overuse, to minimize muscle fatigue while also minimizing regulation or tracking errors. Traditional optimal control approaches need a system model to optimize; however, it is often difficult to formulate a musculoskeletal model that accurately predicts muscle responses to FES. In this paper, we use a novel identification and control structure that contains a recurrent neural network (RNN) and several feedforward neural networks (FNNs). The RNN is trained by supervised learning to identify the system dynamics, while the FNNs are trained by a reinforcement learning method to provide sub-optimal control actions. The output layer of each FNN has its own activation functions, so that the asymmetric constraint on FES and the symmetric constraint on the exoskeleton motor control input can be realized. This new structure is experimentally validated on a seated human participant using a single-joint hybrid neuroprosthesis.
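
One way to realize the asymmetric FES constraint and the symmetric motor-torque constraint at the network outputs is to use differently shaped, bounded activations. The choices below (a scaled sigmoid and a scaled tanh) and their bounds are illustrative assumptions, not necessarily the activations used in the paper.

```python
import numpy as np

def fes_output(z, fes_max=1.0):
    """Asymmetric output activation: FES intensity can only be non-negative
    and bounded above (illustrative choice: a scaled sigmoid)."""
    return fes_max / (1.0 + np.exp(-z))

def motor_output(z, torque_max=30.0):
    """Symmetric output activation: exoskeleton motor torque can be positive
    or negative within +/- torque_max (illustrative choice: a scaled tanh)."""
    return torque_max * np.tanh(z)

z = np.linspace(-3.0, 3.0, 7)
print(fes_output(z))     # values in (0, fes_max): asymmetric constraint
print(motor_output(z))   # values in (-torque_max, torque_max): symmetric constraint
```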

18.
Neurocomputing (Amst) ; 285: 1-9, 2018 Apr 12.
Article in English | MEDLINE | ID: mdl-29755210

ABSTRACT

Cervical auscultation is a method for assessing swallowing performance. However, its ability to serve as a classification tool for a practical clinical assessment method is not fully understood. In this study, we utilized neural network classification methods in the form of Deep Belief Networks to classify swallows. Specifically, we utilized swallows that did not result in clinically significant aspiration and classified them by whether they originated from healthy subjects or unhealthy patients. Dual-axis swallowing vibrations from 1946 discrete swallows were recorded from 55 healthy and 53 unhealthy subjects. The Fourier transforms of both signals were used as inputs to networks of various sizes. We found that single- and multi-layer Deep Belief Networks perform nearly identically when analyzing only a single vibration signal. However, multi-layered Deep Belief Networks demonstrated approximately 5% to 10% greater accuracy and sensitivity when both signals were analyzed concurrently, indicating that higher-order relationships between these vibrations are important for classification and assessment.

19.
Measurement (Lond) ; 109: 316-325, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29203949

ABSTRACT

Wireless Power Transfer (WPT) and wireless data communication are both important research problems with various applications, especially in medicine. However, these two problems are usually studied separately. In this work, we present a joint study of both problems. Most medical electronic devices, such as smart implants, must have both a power supply to allow continuous operation and a communication link to pass information. Traditionally, separate wireless channels for power transfer and communication are utilized, which complicates the system structure, increases power consumption, and makes device miniaturization difficult. A more effective approach is to use a single wireless link that both delivers power and passes information. We present a design of such a wireless link in which power and data travel in opposite directions. In order to aggressively miniaturize the implant and reduce power consumption, we eliminate the traditional multi-bit Analog-to-Digital Converter (ADC), digital memory, and data transmission circuits altogether. Instead, we use a pulse stream, obtained from the original biological signal by a sigma-delta converter and an edge detector, to alter the load properties of the WPT channel. The resulting WPT signal is synchronized with the load changes, therefore requiring no memory elements to record inter-pulse intervals. We take advantage of the high sensitivity of resonant WPT to load changes, and the system's dynamic response is used to transfer each pulse. The transient time of the WPT system is analyzed using the coupling mode theory (CMT). Our experimental results show that the memoryless approach works well for both power delivery and data transmission, providing a new wireless platform for the design of future miniaturized medical implants.
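
A minimal sketch of the first stage of such a signal chain, assuming a first-order sigma-delta modulator: the 1-bit output density tracks the biological signal, and the resulting pulse stream is the kind of waveform that could drive load switching on the WPT link. The modulator order, sampling, and parameters are placeholders, not the paper's circuit.

```python
import numpy as np

def sigma_delta_pulses(signal):
    """First-order sigma-delta modulator: integrate the error between the
    input (assumed in [-1, 1]) and the fed-back 1-bit output, then quantize.
    The density of 1s in the output tracks the input amplitude."""
    integrator = 0.0
    feedback = -1.0                        # DAC value of the previous output bit
    bits = np.zeros(len(signal), dtype=int)
    for n, x in enumerate(signal):
        integrator += x - feedback         # accumulate the quantization error
        bits[n] = 1 if integrator > 0.0 else 0
        feedback = 1.0 if bits[n] else -1.0
    return bits

# A 5 Hz test tone sampled at 1 kHz; an edge detector (not shown) would then
# convert this bit stream into load-switching events on the WPT channel.
t = np.linspace(0.0, 1.0, 1000)
bits = sigma_delta_pulses(0.5 * np.sin(2 * np.pi * 5 * t))
print(bits[:40], f"mean duty = {bits.mean():.2f}")
```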

20.
IEEE Int Conf Rehabil Robot ; 2017: 1073-1078, 2017 07.
Article in English | MEDLINE | ID: mdl-28813964

ABSTRACT

Brain-computer interfaces (BCIs) greatly augment human capabilities by translating brain wave signals into feasible commands to operate external devices. However, the development of BCIs faces many issues, such as the low classification accuracy of brain signals and tedious human-learning procedures. To address these problems, we propose to use signals associated with eye saccades and blinks to control a BCI interface. By extracting existing physiological eye signals, the user does not need to adapt his/her brain waves to the device. Furthermore, using saccade signals to control an external device frees the limbs to perform other tasks. In this research, we used two electrodes placed on top of the left and right ears of thirteen participants. We then used Independent Component Analysis (ICA) to extract meaningful EEG signals associated with eye movements, and a sliding-window technique was implemented to collect relevant features. Finally, we classified the features as horizontal eye movements or blinks using KNN and SVM, achieving a mean classification accuracy of about 97%. The two electrodes were then integrated with off-the-shelf earbuds to control a wheelchair. The earbuds generate voice cues to indicate when to rotate the eyeballs to certain locations (i.e., left or right) or to blink, so that the user can select directional commands to drive the wheelchair. In addition, by properly designing the contents of the voice menus, we can generate as many commands as needed, even though only a limited number of eye saccade movement states can be identified.
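
A rough outline of the processing chain described above (ICA on the two ear channels, sliding-window features, then a classifier) could look like the following. The sampling rate, window sizes, random data, and labels are placeholders, and scikit-learn's FastICA and SVC are assumed stand-ins for the authors' exact tools.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

# Hypothetical recording: 2 channels near the ears, 250 Hz, 60 s (random placeholder data).
eeg = np.random.randn(2, 250 * 60)

# 1) ICA to separate eye-movement-related components from the two channels.
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(eeg.T).T                 # shape (2, n_samples)

# 2) Sliding windows as feature vectors (window/step sizes are placeholders).
win, step = 250, 125
windows = np.stack([sources[:, s:s + win].ravel()
                    for s in range(0, sources.shape[1] - win, step)])

# 3) Classify each window as a horizontal saccade or a blink (labels here are
#    random stand-ins; a real study would use annotated events).
labels = np.random.randint(0, 2, len(windows))
clf = SVC(kernel="rbf").fit(windows, labels)
print(clf.score(windows, labels))
```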


Subject(s)
Brain-Computer Interfaces , Electroencephalography/methods , Saccades/physiology , Signal Processing, Computer-Assisted/instrumentation , Wheelchairs , Adult , Cues , Equipment Design , Female , Humans , Male , Young Adult